
    Expected exponential loss for gaze-based video and volume ground truth annotation

    Many recent machine learning approaches used in medical imaging rely heavily on large amounts of image and ground truth data. In the context of object segmentation, pixel-wise annotations are extremely expensive to collect, especially in video and 3D volumes. To reduce this annotation burden, we propose a novel framework that allows annotators to simply observe the object to segment and record where they look with a $200 eye gaze tracker. Our method then estimates pixel-wise probabilities for the presence of the object throughout the sequence, from which we train a classifier in a semi-supervised setting using a novel Expected Exponential loss function. We show that our framework provides superior performance over existing strategies on a wide range of medical image settings, and that our method can also be combined with current crowd-sourcing paradigms. Comment: 9 pages, 5 figures, MICCAI 2017 - LABELS Workshop
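
The precise form of the Expected Exponential loss is not given in this summary, but taking the expectation of the standard exponential loss over a Bernoulli label whose parameter is the gaze-derived pixel probability gives a plausible minimal sketch (the function name and formulation here are assumptions, not the authors' code):

```python
import math

def expected_exp_loss(score, p_fg):
    """Expected exponential loss for one pixel.

    score : classifier output f(x) (positive favours foreground)
    p_fg  : gaze-derived probability that the pixel is foreground

    The expectation is over the unknown label y in {+1, -1}, drawn with
    P(y = +1) = p_fg, applied to the usual exponential loss exp(-y * f(x)).
    """
    return p_fg * math.exp(-score) + (1.0 - p_fg) * math.exp(score)

# A pixel the gaze model believes is foreground (high p_fg) is penalised
# far more for a negative score than for a positive one, so unlabelled
# pixels still shape the classifier in proportion to their probability.
```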

    Probe-based Rapid Hybrid Hyperspectral and Tissue Surface Imaging Aided by Fully Convolutional Networks

    Tissue surface shape and reflectance spectra provide rich intra-operative information useful in surgical guidance. We propose a hybrid system which displays an endoscopic image with a fast joint inspection of tissue surface shape using structured light (SL) and hyperspectral imaging (HSI). For SL, a miniature fibre probe is used to project a coloured spot pattern onto the tissue surface. In HSI mode, standard endoscopic illumination is used, with the fibre probe collecting reflected light and encoding the spatial information into a linear format that can be imaged onto the slit of a spectrograph. Correspondence between the arrangement of fibres at the distal and proximal ends of the bundle was found using spectral encoding. Then, during pattern decoding, a fully convolutional network (FCN) was used for spot detection, followed by a matching propagation algorithm for spot identification. This method enabled fast reconstruction (12 frames per second) using a GPU. The hyperspectral image was combined with the white light image and the reconstructed surface, showing the spectral information of different areas. The system has been validated in phantom and ex vivo experiments. Comment: This paper was submitted to MICCAI 2016 on 17 March, 2016, and conditionally accepted on 2 June, 2016

    SERV-CT: A disparity dataset from cone-beam CT for validation of endoscopic 3D reconstruction

    In computer vision, reference datasets from simulation and real outdoor scenes have been highly successful in promoting algorithmic development in stereo reconstruction. Endoscopic stereo reconstruction for surgical scenes gives rise to specific problems, including the lack of clear corner features, highly specular surface properties and the presence of blood and smoke. These issues present difficulties both for stereo reconstruction itself and for standardised dataset production. Previous datasets have been produced using computed tomography (CT) or structured light reconstruction on phantom or ex vivo models. We present a stereo-endoscopic reconstruction validation dataset based on cone-beam CT (SERV-CT). Two ex vivo small porcine full-torso cadavers were placed within the view of the endoscope, with both the endoscope and target anatomy visible in the CT scan. The orientation of the endoscope was then manually aligned to match the stereoscopic view, and benchmark disparities, depths and occlusions were calculated. The requirement of a CT scan limited the number of stereo pairs to 8 from each ex vivo sample. For the second sample, an RGB surface was acquired to aid alignment of smooth, featureless surfaces. Repeated manual alignments showed an RMS disparity accuracy of around 2 pixels and a depth accuracy of about 2 mm. A simplified reference dataset is provided, consisting of endoscope image pairs with corresponding calibration, disparities, depths and occlusions covering the majority of the endoscopic image and a range of tissue types, including smooth specular surfaces, as well as significant variation in depth. We assessed the performance of various stereo algorithms from online available repositories. There is significant variation between algorithms, highlighting some of the challenges of surgical endoscopic images. The SERV-CT dataset provides an easy-to-use stereoscopic validation resource for surgical applications, with smooth reference disparities and depths covering the majority of the endoscopic image. It complements existing resources well and, we hope, will aid the development of surgical endoscopic anatomical reconstruction algorithms.
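
The dataset's disparities and depths are linked by the standard rectified pinhole stereo relation depth = focal × baseline / disparity. A minimal sketch (the calibration values in the comment are hypothetical, not SERV-CT's actual parameters):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Convert stereo disparity (pixels) to depth (mm) for a rectified
    pinhole stereo pair: depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# With illustrative (made-up) calibration values f = 1000 px, B = 4 mm,
# a 40-pixel disparity maps to a 100 mm depth; because depth varies as
# 1/disparity, a fixed pixel error matters more at small disparities.
```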

    Surgical spectral imaging

    Recent technological developments have resulted in the availability of miniaturised spectral imaging sensors capable of operating in the multi- (MSI) and hyperspectral imaging (HSI) regimes. Simultaneous advances in image-processing techniques and artificial intelligence (AI), especially in machine learning and deep learning, have made these data-rich modalities highly attractive as a means of extracting biological information non-destructively. Surgery in particular is poised to benefit from this, as spectrally-resolved tissue optical properties can offer enhanced contrast as well as diagnostic and guidance information during interventions. This is particularly relevant for procedures where inherent contrast is low under standard white light visualisation. This review summarises recent work in surgical spectral imaging (SSI) techniques, taken from PubMed, Google Scholar and arXiv searches spanning the period 2013–2019. New hardware, optimised for use in both open and minimally-invasive surgery (MIS), is described, and recent commercial activity is summarised. Computational approaches to extract spectral information from conventional colour images are reviewed, as tip-mounted cameras become more commonplace in MIS. Model-based and machine learning methods of data analysis are discussed in addition to simulation, phantom and clinical validation experiments. A wide variety of surgical pilot studies are reported, but it is apparent that further work is needed to quantify the clinical value of MSI/HSI. The current trend toward data-driven analysis emphasises the importance of widely-available, standardised spectral imaging datasets, which will aid understanding of variability across organs and patients, and drive clinical translation.

    Evaluating surgical skills from kinematic data using convolutional neural networks

    The need for automatic surgical skills assessment is increasing, especially because manual feedback from senior surgeons observing junior surgeons is subjective and time-consuming. Thus, automating surgical skills evaluation is a very important step towards improving surgical practice. In this paper, we designed a Convolutional Neural Network (CNN) to evaluate surgeon skills by extracting patterns in the surgeon motions performed in robotic surgery. The proposed method is validated on the JIGSAWS dataset and achieved very competitive results with 100% accuracy on the suturing and needle passing tasks. While we leveraged the CNN's efficiency, we also managed to mitigate its black-box effect using class activation maps. This feature allows our method to automatically highlight which parts of the surgical task influenced the skill prediction, and can be used to explain the classification and to provide personalized feedback to the trainee. Comment: Accepted at MICCAI 201
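
A class activation map (Zhou et al., 2016) is a weighted sum of the last convolutional layer's feature maps, using the output-layer weights of the class of interest. A minimal 1-D sketch for a kinematic time series; the shapes and names are assumptions, not the paper's code:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Class activation map: weight each channel of the final conv
    layer's output by the corresponding output-layer weight of the
    predicted class, then sum over channels.

    feature_maps  : array of shape (K, T) -- K channels over T time steps
                    (1-D maps, since the input is a kinematic time series)
    class_weights : array of shape (K,)   -- weights for the target class
    """
    # Contract the channel axis: result has shape (T,)
    return np.tensordot(class_weights, feature_maps, axes=1)

# Peaks in the resulting 1-D map indicate which parts of the surgical
# task most influenced the skill prediction.
```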

    Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy

    Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera proffers a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability, and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: Results showed that the confidence measure had a significant influence on the classification accuracy, and MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of the art in automatic labeling of endoscopic videos by introducing the use of the confidence metric, and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data of our experiments will be released as the first in vivo MI dataset upon publication of this paper. Comment: 7 pages, 6 images, 2 tables
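
The paper's exact dispersion measure is not specified in this summary; one common way to turn the spread of a superpixel's class probabilities into a confidence score is the normalised Shannon entropy, sketched below (the function name and thresholding strategy are assumptions):

```python
import math

def prediction_confidence(class_probs):
    """Confidence from the dispersion of class probabilities: 1 minus
    the normalised Shannon entropy, so a uniform distribution (maximum
    dispersion) gives 0 and a one-hot distribution gives 1."""
    n = len(class_probs)
    entropy = -sum(p * math.log(p) for p in class_probs if p > 0)
    return 1.0 - entropy / math.log(n)

# Tagging an image only when confidence exceeds a threshold trades
# coverage for accuracy, in the spirit of the reported 65% -> 90% gain.
```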

    Crowd disagreement about medical images is informative

    Classifiers for medical image analysis are often trained with a single consensus label, based on combining labels given by experts or crowds. However, disagreement between annotators may be informative, and thus removing it may not be the best strategy. As a proof of concept, we predict whether a skin lesion from the ISIC 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. We compare using the mean annotations, illustrating consensus, to standard deviations and other distribution moments, illustrating disagreement. We show that the mean annotations perform best, but that the disagreement measures are still informative. We also make the crowd annotations used in this paper available at https://figshare.com/s/5cbbce14647b66286544. Comment: Accepted for publication at MICCAI LABELS 201
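
The consensus and disagreement features can be sketched directly: per lesion and per visual characteristic, the mean of the crowd scores is the consensus feature and the standard deviation (or a higher moment) the disagreement feature. A minimal sketch with made-up scores:

```python
from statistics import mean, pstdev

def consensus_and_disagreement(scores):
    """Summarise one lesion's crowd annotations for a single visual
    characteristic: the mean captures consensus, the population standard
    deviation captures disagreement. Both can be fed to a downstream
    melanoma classifier as features."""
    return mean(scores), pstdev(scores)

# Hypothetical example: five annotators rating a characteristic.
# Identical scores give zero spread (confident consensus); a large
# spread flags an ambiguous, potentially informative lesion.
```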

    Physiological parameter estimation from multispectral images unleashed

    Multispectral imaging in laparoscopy can provide tissue reflectance measurements for each point in the image at multiple wavelengths of light. These reflectances encode information on important physiological parameters not visible to the naked eye. Fast decoding of the data during surgery, however, remains challenging. While model-based methods suffer from inaccurate base assumptions, a major bottleneck for competing machine learning-based solutions is the lack of labelled training data. In this paper, we address this issue with the first transfer learning-based method for physiological parameter estimation from multispectral images. It relies on a highly generic tissue model that aims to capture the full range of optical tissue parameters that can potentially be observed in vivo. Adaptation of the model to a specific clinical application based on unlabelled in vivo data is achieved using a new concept of domain adaptation that explicitly addresses the high variance often introduced by conventional covariate-shift correction methods. According to comprehensive in silico and in vivo experiments, our approach enables accurate parameter estimation for various tissue types without the need to incorporate specific prior knowledge on optical properties, and could thus pave the way for many exciting applications in multispectral laparoscopy.
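
One conventional covariate-shift correction the paper contrasts against is importance weighting of training samples by the density ratio between target and source domains; clipping the ratio is a common remedy for the high variance mentioned above. A minimal sketch (not the paper's method; the names and clipping threshold are assumptions):

```python
def clipped_importance_weights(source_density, target_density, max_w=10.0):
    """Covariate-shift correction weights w(x) = p_target(x) / p_source(x),
    evaluated at each training sample and clipped at max_w to limit the
    variance that very large ratios introduce into the weighted loss."""
    return [min(t / s, max_w)
            for s, t in zip(source_density, target_density)]

# Samples that are common in the target domain but rare in the source
# domain receive large (clipped) weights during training.
```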

    Transfer Learning for Domain Adaptation in MRI: Application in Brain Lesion Segmentation

    Magnetic Resonance Imaging (MRI) is widely used in routine clinical diagnosis and treatment. However, variations in MRI acquisition protocols result in different appearances of normal and diseased tissue in the images. Convolutional neural networks (CNNs), which have been shown to be successful in many medical image analysis tasks, are typically sensitive to variations in imaging protocols. Therefore, in many cases, networks trained on data acquired with one MRI protocol do not perform satisfactorily on data acquired with different protocols. This limits the use of models trained with large annotated legacy datasets on a new dataset from a different domain, which is a recurring situation in clinical settings. In this study, we aim to answer the following central questions regarding domain adaptation in medical image analysis: Given a fitted legacy model, 1) How much data from the new domain is required for a decent adaptation of the original network?; and 2) What portion of the pre-trained model parameters should be retrained, given a certain number of new-domain training samples? To address these questions, we conducted extensive experiments on a white matter hyperintensity segmentation task. We trained a CNN on legacy MR images of the brain and evaluated the performance of the domain-adapted network on the same task with images from a different domain. We then compared the performance of the model to surrogate scenarios in which either the same trained network is used or a new network is trained from scratch on the new dataset. The domain-adapted network, tuned with only two training examples, achieved a Dice score of 0.63, substantially outperforming a similar network trained on the same set of examples from scratch. Comment: 8 pages, 3 figures
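
The Dice score used in the evaluation is the standard overlap measure between predicted and reference segmentations, 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks:

```python
def dice_score(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks, given as
    flat sequences of 0/1 values: 2|A ∩ B| / (|A| + |B|).
    Two empty masks are treated as a perfect match."""
    intersection = sum(p & t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    return 2.0 * intersection / total if total else 1.0

# A Dice score of 1.0 means perfect overlap; 0.63 (as reported above)
# indicates substantial but imperfect agreement with the reference.
```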

    Tissue classification for laparoscopic image understanding based on multispectral texture analysis.

    Intraoperative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study, we show through statistical analysis that (1) multispectral imaging data are superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors, and (2) combining the tissue texture with the reflectance spectrum improves the classification performance. The classifier reaches an accuracy of 98.4% on our dataset. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.